Audio-visual generalised zero-shot learning for video classification requires understanding the relations between the audio and visual information in order to recognise samples from novel, previously unseen classes at test time. The natural semantic and temporal alignment between audio and visual data in videos can be exploited to learn powerful representations that generalise to unseen classes at test time. We propose a multi-modal and Temporal Cross-attention Framework (\modelname) for audio-visual generalised zero-shot learning. Its inputs are temporally aligned audio and visual features obtained from pre-trained networks. Encouraging the framework to focus on cross-modal correspondence across time, instead of self-attention within the modalities, boosts performance significantly. We show that our proposed framework, which ingests temporal features, yields state-of-the-art performance on the \ucf, \vgg, and \activity benchmarks. Code to reproduce all results is available at \url{https://github.com/explainableml/tcaf-gzsl}.
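The core mechanism described above, letting each modality attend to the *other* modality across time rather than to itself, can be sketched with plain scaled dot-product attention. This is a minimal illustration in numpy, not the authors' TCaF implementation: the learned projection matrices, multiple heads, and the full transformer block are omitted, and all names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys_values):
    """Scaled dot-product attention where one modality queries the other.

    queries:     (T, d) features of the attending modality
    keys_values: (T, d) features of the attended modality
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (T, T) cross-modal affinities
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (T, d) attended features

# Temporally aligned audio and visual feature sequences (random stand-ins
# for the pre-trained network features the framework actually consumes).
rng = np.random.default_rng(0)
T, d = 10, 64
audio = rng.standard_normal((T, d))
visual = rng.standard_normal((T, d))

audio_attended = cross_modal_attention(audio, visual)   # audio queries visual
visual_attended = cross_modal_attention(visual, audio)  # visual queries audio
```

In a within-modality self-attention variant, `audio` would query `audio`; the abstract's claim is that restricting attention to the cross-modal direction is what improves performance.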
Sunquakes are seismic emissions visible on the solar surface, associated with some solar flares. Although discovered in 1998, they have only recently become a more commonly detected phenomenon. Despite the availability of several manual detection guidelines, to our knowledge, the astrophysical data produced for sunquakes is new to the field of Machine Learning. Detecting sunquakes is a daunting task for human operators and this work aims to ease and, if possible, to improve their detection. Thus, we introduce a dataset constructed from acoustic egression-power maps of solar active regions obtained for Solar Cycles 23 and 24 using the holography method. We then present a pedagogical approach to the application of machine learning representation methods for sunquake detection using AutoEncoders, Contrastive Learning, Object Detection and recurrent techniques, which we enhance by introducing several custom domain-specific data augmentation transformations. We address the main challenges of the automated sunquake detection task, namely the very high noise patterns in and outside the active region shadow and the extreme class imbalance given by the limited number of frames that present sunquake signatures. With our trained models, we find temporal and spatial locations of peculiar acoustic emission and qualitatively associate them to eruptive and high energy emission. While noting that these models are still in a prototype stage and there is much room for improvement in metrics and bias levels, we hypothesize that their agreement on example use cases has the potential to enable detection of weak solar acoustic manifestations.
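The extreme class imbalance the abstract highlights (very few frames contain sunquake signatures) is commonly mitigated by weighting the loss per class. The following is a generic inverse-frequency weighting sketch, not the authors' method; the counts are invented for illustration.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency.

    Returns {class: weight} such that rare classes receive large weights;
    the weights average to 1 over the classes when counts are equal.
    """
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes, weights))

# Hypothetical split: 990 "no sunquake" frames vs 10 "sunquake" frames.
labels = np.array([0] * 990 + [1] * 10)
w = inverse_frequency_weights(labels)
# The rare sunquake class receives a much larger weight than the majority class.
```

These weights would then scale the per-sample loss (e.g. the `weight` argument of a cross-entropy loss), so that the few sunquake frames contribute as much to training as the abundant quiet frames.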